Cortical Representation


A Computer Simulation of Olfactory Cortex with Functional Implications for Storage and Retrieval of Olfactory Information

Neural Information Processing Systems

Based on anatomical and physiological data, we have developed a computer simulation of piriform (olfactory) cortex which is capable of reproducing spatial and temporal patterns of actual cortical activity under a variety of conditions. Using a simple Hebb-type learning rule in conjunction with the cortical dynamics which emerge from the anatomical and physiological organization of the model, the simulations are capable of establishing cortical representations for different input patterns. The basis of these representations lies in the interaction of sparsely distributed, highly divergent/convergent interconnections between modeled neurons. We have shown that different representations can be stored with minimal interference. Further, we have demonstrated that the degree of overlap of cortical representations for different stimuli can also be modulated.
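The abstract above describes a Hebb-type rule acting on sparse, divergent/convergent connectivity to form distinct cortical representations. A minimal sketch of that general idea (not the paper's model; all sizes, sparsity levels, and the learning rate here are illustrative assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 32

# Sparse, divergent/convergent connectivity: each input contacts only a
# fraction of the output units (illustrative 20% connection probability).
mask = rng.random((n_out, n_in)) < 0.2
W = rng.random((n_out, n_in)) * mask * 0.1

def respond(x, k=5):
    # The "representation" of an input: the k most strongly driven units.
    drive = W @ x
    active = np.zeros(n_out)
    active[np.argsort(drive)[-k:]] = 1.0
    return active

def hebb_step(x, eta=0.05):
    # Hebbian update: strengthen weights between co-active input/output
    # pairs, restricted to the existing (masked) connections.
    global W
    y = respond(x)
    W += eta * np.outer(y, x) * mask

# Two different sparse input patterns.
a = (rng.random(n_in) < 0.15).astype(float)
b = (rng.random(n_in) < 0.15).astype(float)

for _ in range(20):
    hebb_step(a)
    hebb_step(b)

ra, rb = respond(a), respond(b)
overlap = int((ra * rb).sum())
print("overlap between the two representations:", overlap)
```

Repeated Hebbian reinforcement drives each pattern toward its own set of winning units; the `overlap` count is one crude way to quantify interference between the stored representations.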


A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning

Sahani, Maneesh

Neural Information Processing Systems

Significant plasticity in sensory cortical representations can be driven in mature animals either by behavioural tasks that pair sensory stimuli with reinforcement, or by electrophysiological experiments that pair sensory input with direct stimulation of neuromodulatory nuclei, but usually not by sensory stimuli presented alone. Biologically motivated theories of representational learning, however, have tended to focus on unsupervised mechanisms, which may play a significant role on evolutionary or developmental timescales, but which neglect this essential role of reinforcement in adult plasticity. By contrast, theoretical reinforcement learning has generally dealt with the acquisition of optimal policies for action in an uncertain world, rather than with the concurrent shaping of sensory representations. This paper develops a framework for representational learning which builds on the relative success of unsupervised generative-modelling accounts of cortical encodings to incorporate the effects of reinforcement in a biologically plausible way.
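The abstract's central observation, that plasticity requires stimuli to be *paired with reinforcement*, is often captured generically by "three-factor" rules in which a reward signal gates the Hebbian co-activity term. The sketch below illustrates that generic idea only; it is not the paper's algorithm, and the linear response and parameter values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 32, 16
W = rng.normal(scale=0.1, size=(n_out, n_in))

def update(W, x, reward, eta=0.1):
    # Three-factor update: presynaptic activity x, postsynaptic response y,
    # and a scalar reinforcement signal that gates the weight change.
    y = W @ x
    return W + eta * reward * np.outer(y, x)

x = rng.random(n_in)

W_reinforced = update(W, x, reward=1.0)  # stimulus paired with reinforcement
W_unpaired = update(W, x, reward=0.0)    # stimulus presented alone

print(np.allclose(W_unpaired, W))    # stimulus alone leaves weights unchanged
print(np.allclose(W_reinforced, W))  # reinforced pairing reshapes them
```

With `reward=0` the update vanishes, matching the observation that sensory stimuli presented alone usually do not drive representational change.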

